perm filename EASY[E85,JMC] blob
sn#801112 filedate 1985-09-04 generic text, type C, neo UTF8
easy[e85,jmc] Easy unsolved problems of AI (for IJCAI 1985)
Sometimes I wish I were in Siberia.
What is easy for humans? I mean very easy.
in situation calculus
towers
What about designs for towers, towers actually present, towers under
construction, damaged or destroyed towers?
an epistemologically adequate formalism for what we
actually know about stacking objects. What do we know that
makes us build towers bottom up? When do humans make errors
of ordering actions and when don't we?
concurrent actions and events
contexts
heuristics
What are some additional normal assumptions?
ceteris paribus de re = de dicto
Normally whether one wants to go to a concert depends on who's playing
but not on who is selling tickets.
What is the format of information in a common sense database?
What is the knowledge that common sense reasoning depends on but
that is not readily expressed in natural language?
How can we keep the formalism open ended at the fundamental level?
Will non-monotonic reasoning do it by itself? Probably not.
What about NMR + explicit contexts? It looks hopeful.
Outline of talk
1. The logic-based approach to AI
2. I am encouraged by progress and the prospects of progress.
The problems:
3. Perhaps there will be some merger of the expert systems approach
and the logic-based approach.
\centerline{\bf The logic-based approach}
Represent information about the world in logical languages (mostly
first order).
Use modalities but reify instead of using modal logic.
Use circumscription or other formalized non-monotonic reasoning.
\centerline{\bf References}
Programs with Common Sense
Phil. prob.
Some expert programs need ...
Circumscription
VAL on c.
Hobbs and Moore
The common sense database
Many of the problems I will discuss naturally center around making
a general database of common sense knowledge about the world. ``General''
means that once the facts about a domain, e.g. the effects of the actions
of moving a physical object, are entered in the database, then an
arbitrary program will be able to use these facts. This won't be easy;
present axiomatizations are ad hoc, tailored to specific programs.
The effects of actions and other events
In my view the most fundamental common sense knowledge
is that by which we predict the effects of events, and the most
important class of events are actions. The common sense database
must contain a lot of information of this kind.
Let me begin by denouncing an oversimplified view. This
view says that we predict the effects of actions by simulating
them in our heads, and that this is all the information we need
about the effects of actions. Indeed simulation is of some use,
but it isn't all the robot will need, and some ways of prediction
that seem to be simulation really aren't quite.
Suppose a person is planning an airplane trip, and let's
take the view of it most like simulation. He thinks to himself,
I'll take PSA 615 from L.A. to Chicago and then Panam 2 to Dallas.
He concludes that that isn't any good, because it will take
longer than other routes. He certainly didn't simulate and measure
the time taken by the simulation. Instead he did arithmetic and
added the flight times. For our robot to do the same it needs
more than a simulation program.
Second, we can do general reasoning about actions, and robots must,
e.g. ``All flights from L.A. to New York take more than four hours.''
Having disposed of mere simulation, we are ready to face
the problem of how to represent information about the effects
of actions.
One such formalism is the ``situation calculus'' proposed
in (McCarthy and Hayes 1969).
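The situation calculus idea can be illustrated on a toy blocks world. The following is a minimal sketch, not the original formalism: a situation is modeled as a frozenset of fluents, and the names `result` and `holds`, the `move` action, and the fluent encoding are all invented for illustration.

```python
# Sketch of the situation calculus on a toy blocks world.
# A situation is a frozenset of fluents like ("on", "A", "table").

def holds(fluent, s):
    return fluent in s

def result(action, s):
    """The situation resulting from performing action in situation s."""
    kind, x, y = action
    movable = (kind == "move" and holds(("clear", x), s)
               and (y == "table" or holds(("clear", y), s)))
    if not movable:
        return s  # inapplicable actions leave the situation unchanged
    new = set(s)
    for f in s:
        if f[0] == "on" and f[1] == x:
            new.discard(f)           # x leaves its old support
            if f[2] != "table":
                new.add(("clear", f[2]))  # old support becomes clear
    new.add(("on", x, y))
    if y != "table":
        new.discard(("clear", y))
    return frozenset(new)

s0 = frozenset({("on", "A", "table"), ("on", "B", "table"),
                ("clear", "A"), ("clear", "B")})
s1 = result(("move", "A", "B"), s0)
# holds(("on", "A", "B"), s1) is True; ("clear", "B") no longer holds
```

Note that `result` here is a simulation in exactly the sense denounced above; the point of the situation calculus is that the same `result(a, s)` terms can also appear inside general assertions that are reasoned about rather than executed.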
Dead birds don't fly
As long as we are willing to put the fact that dead birds
don't normally fly in the part of our database concerned with
birds and flying there is no problem. We have
∀x.dead x ⊃ ab aspect2 x
and
∀x.dead x ∧ ¬ab aspect7 x ⊃ ¬flies x.
However, when we want to include more information, this
becomes inefficient. Not only don't dead birds fly, they don't
sing, lay eggs or eat. On the other hand, if a bird is black
in color, it normally remains black when dead.
The partial solution of the problem, available when we are willing
to treat dead birds not flying as a special case, should not be forgotten.
It might be all we know how to do now, and even when we can do better
in general, if a program has to think a lot about whether a bird
flies, it may be useful to keep this information in the database
even though it is a consequence of more general information.
In general, however, we know that dead animals don't act,
birds are animals, and flying is an action. We can get by with making
this an absolute, but suppose we want to allow for zombies.
Certainly a child is prepared to treat being informed (misinformed)
about the existence of zombies in the same way as he treats any
other information about exceptions to his previously learned
generalizations. Therefore, we should axiomatize death in a way
that interacts properly with actions of all kinds and which has
the requisite generality.
Remark: Minsky's bird with its feet encased in concrete may offer
additional difficulty. Since this is an ad hoc phenomenon,
there is no information about the ability of entities with
their feet encased in concrete to fly. It has to follow
from an understanding of mechanism. What isn't clear is
how the priorities of the mechanism are established.
From Vladimir:
How about this:
∀x y.bird x ∧ activity y ∧ ¬ ab aspect2(x,y) ⊃ does(x,y)
activity flying
activity singing
................
∀x y.dead x ∧ activity y ⊃ ab aspect2(x,y)
∀x y.dead x ∧ activity y ∧ ¬ ab aspect7(x,y) ⊃ ¬ does(x,y)
Query: To what extent must the axiomatization of an exception specify
what it is an exception to?
∀x y.bird x ∧ activity y ∧ do(birds,y) ∧ ¬ab aspect2(x,y) ⊃ does(x,y)
activity flying
do(birds,flying)
activity singing
do(birds,singing)
activity formalizing
∀x y.dead x ∧ activity y ⊃ ab aspect2(x,y)
∀x y.dead x ∧ activity y ∧ ¬ab aspect7(x,y) ⊃ ¬ does(x,y)
∀x y.person x ∧ activity y
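Vladimir's axioms can be checked on a tiny finite domain by minimizing the abnormality predicates by hand. The sketch below hard-codes the minimization (ab aspect2 holds exactly where the dead-animal axiom forces it, ab aspect7 nowhere) rather than computing a genuine circumscription, and the domain is invented for illustration.

```python
# Hand-minimized model of the bird/activity axioms on a finite domain.
birds = {"tweety", "fred"}
dead = {"fred"}
activities = {"flying", "singing"}

def ab_aspect2(x, y):
    # forced: dead x ∧ activity y ⊃ ab aspect2(x,y); minimized elsewhere
    return x in dead and y in activities

def ab_aspect7(x, y):
    # nothing forces an aspect7 abnormality in this domain
    return False

def does(x, y):
    # bird x ∧ activity y ∧ ¬ab aspect2(x,y) ⊃ does(x,y)
    if x in birds and y in activities and not ab_aspect2(x, y):
        return True
    # dead x ∧ activity y ∧ ¬ab aspect7(x,y) ⊃ ¬does(x,y)
    if x in dead and y in activities and not ab_aspect7(x, y):
        return False
    return False  # no positive ground for does(x, y)
```

Here the live bird does every activity by default and the dead one does none, which is the intended minimal model; a real circumscription procedure would have to discover the priority of aspect2 over aspect7 rather than have it built in.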
The problem of compiling down generalizations. The axioms concerning birds
that we had before are consequences of the more general rules. However,
we humans think of them first in specialized form and then can formulate
generalizations. Maybe programs can do this too.
Representing control information or using information for control purposes
The blocks world has been used as a sample domain for planning
programs that achieve conjunctions of goals. However, none of
the programs discussed in the Handbook of AI make use of the
following simple fact. In building a tower of blocks, it is
pointless to achieve the goal of putting block A on block B unless
the tower from B down is complete.
We have to formulate the following:
1. A goal should be postponed until it won't have to be undone in
order to achieve other goals.
2. A block in a position above blocks not in final position has
to be moved.
3. If only the top block is movable, then lower blocks cannot
be moved until the top block has been moved. This is declarative
information in the sense that it can be used for a variety of
purposes, e.g. to prevent moving lower blocks or to signal attempt
to get at lower blocks by attaching an alarm to the top block.
Remark: This informal formulation contains references to the past.
We must either use a language that allows references to the past
or find a way of expressing these principles without such reference.
The former seems preferable.
Suppose we have a conjunctive goal p1∧p2∧p3 with the requirement
that p3 has to be undone to achieve p1 and p2 if they aren't
already true, etc. We should rewrite the goal as r1∧r2∧r3, where
r1 = p1, r2 = p1∧p2, and r3 = p1∧p2∧p3. Now achieving r2 has
r1 as a precondition and achieving r3 has r2 as a precondition,
and the interferences have been eliminated. Thus the object is to
redefine the goals so that the achievement of any goal does not
prevent the achievement of others although the achievement of
some goals may have preconditions that involve others positively.
Notice that the present formulation is propositional. Therefore,
it is worth thinking about instantiating the universally quantified
axioms sufficiently so that the problem becomes propositional. Then
we can have propositional algorithms for reformulating (see Devika)
so that all preconditions are positive. When can a problem be
given a positive reformulation?
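The rewriting from the p's to the r's is mechanical once the goals are listed in dependency order. A small sketch, assuming the ordering p1, …, pn is given:

```python
# Rewrite an ordered conjunctive goal [p1, ..., pn] as [r1, ..., rn]
# with ri = p1 ∧ ... ∧ pi, so that each ri has r(i-1) as a
# precondition and achieving a later goal never undoes an earlier one.

def rewrite_goals(goals):
    rewritten = []
    for i in range(len(goals)):
        rewritten.append(tuple(goals[: i + 1]))  # ri = p1 ∧ ... ∧ pi
    return rewritten

rs = rewrite_goals(["p1", "p2", "p3"])
# rs == [("p1",), ("p1", "p2"), ("p1", "p2", "p3")]
```

The transformation itself is trivial; the work lies in finding the dependency order, which is exactly the "tower from B down is complete" knowledge discussed above.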
Propositional planning
A state is characterized by the truth and falsity of a set
of propositional variables. In general there may be dependence
relations among the variables, so that not all tuples of values are possible.
A subset of variables is called statewise independent if all tuples of
values are possible.
An action has a precondition which is
a boolean function of the variables.
An action has an effect of changing the state. In general there may
be many possible new states, but we'll call the system determinate
if there is a unique successor state. The most general rule for
the new state that results from an action is hard to express. The
STRIPS subcase is important. It involves certain variables becoming
true and others becoming false, i.e. according to add and delete lists.
The simplest case is when all the preconditions are positive,
no actions have delete lists, and the final goal is positive.
It is still simpler if all the above are conjunctions. Let's
call this case C0. C0 is solvable by looking for a goal
conjunct or precondition conjunct that is immediately doable.
The number of steps to a solution is at most the number of
variables. The condition that no actions have delete lists
seems too strong. Is there a weaker condition that still leaves
goals easy to achieve? See the non-Boolean system formulation.
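The greedy procedure for C0 can be written down directly. A sketch, assuming actions are given as (name, precondition-set, add-set) triples with no delete lists; since each applied action adds at least one new variable, the plan length is bounded by the number of variables.

```python
# Greedy solver for the C0 case: preconditions and goals are positive
# conjunctions (sets of variables) and actions have no delete lists,
# so an achieved conjunct stays achieved forever.

def solve_c0(state, goal, actions):
    """state, goal: sets of variables; actions: (name, pre, add)."""
    state = set(state)
    plan = []
    while not goal <= state:
        for name, pre, add in actions:
            # immediately doable and actually adds something new
            if pre <= state and not add <= state:
                state |= add
                plan.append(name)
                break
        else:
            return None  # no immediately doable action helps
    return plan

acts = [("a1", set(), {"p"}), ("a2", {"p"}, {"q"}), ("a3", {"q"}, {"r"})]
plan = solve_c0(set(), {"r"}, acts)
# plan == ["a1", "a2", "a3"]
```

The `not add <= state` test is what bounds the plan: every step strictly enlarges the state, so the loop runs at most once per variable.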
We can transform a problem by introducing new variables that are Boolean
combinations of old variables. We can rewrite the initial state, the
preconditions for actions, the final goals and the constraints in terms of
the enlarged set of variables using the constraints as axioms. A variable
can be eliminated if it becomes redundant in that it is no longer used in
expressing the initial state, the preconditions or the final goal and
appears only definitionally in the constraints.
Adding new variables, rewriting conditions and eliminating redundant
variables transforms a problem. One goal is to transform a problem
into a C0 problem, because they are so easy to solve. Let's call
a problem C1 if it is transformable to C0. Whether this can be
done may not be easy to determine.
This formulation doesn't refer to the past. Perhaps some systems
can be expressed more simply if we allow reference to the past.
Even if the system cannot be expressed more simply, perhaps referring
to the past and future simplifies expression of the strategies.
A simple strategy is a conditional expression giving the action
to be performed as a function of the state.
One can think generally about this formal system, but it is probably
most fruitful to see if it applies to block stacking problems including
the tower of Hanoi.
A system may be shrunk by adding a constraint. This reduces the
collection of moves available for solving it, but it may facilitate
transformation to a simpler form.
We may also merge actions that have the same effect and intent. We
may also introduce macro-actions, and these may facilitate transformation.
To what extent are we just re-inventing part of dynamic logic?
Has dynamic logic been applied to block problems?
Non-Boolean system formulation.
Instead of using Boolean variables, we can have variables
that take on a finite set of values. An action changes the values
of certain variables.
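A minimal sketch of the non-Boolean formulation, with invented variable names: an action is a pair of partial assignments, a precondition that must match the current state and an effect giving the new values of the variables it changes.

```python
# Non-Boolean state variables over finite domains.  An action is
# applicable when its precondition (a partial assignment) matches,
# and its effect overwrites just the variables it mentions.

def applicable(state, pre):
    return all(state[v] == val for v, val in pre.items())

def apply_action(state, pre, effect):
    if not applicable(state, pre):
        return state  # inapplicable actions change nothing
    new = dict(state)
    new.update(effect)
    return new

s = {"hand": "empty", "door": "closed"}
s = apply_action(s, {"hand": "empty"}, {"hand": "key"})
s = apply_action(s, {"hand": "key", "door": "closed"}, {"door": "open"})
# s == {"hand": "key", "door": "open"}
```

The Boolean case is recovered by giving every variable the domain {true, false}; add and delete lists then correspond to effects setting variables true and false respectively.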
We also have the possibility of non-monotonic formulations
of the states or preconditions for actions.
Perhaps it's as simple as adding to the preconditions for a goal
not merely the conditions on which achieving the goal is feasible
but also conditions under which achieving the goal will be stable.
Natural kinds
Humans often jump to the conclusion that there are things
yet unknown in common among entities distinguished in certain
ways. We are prepared to learn more about lemons, hills, marriages
or models of theories. Children do this automatically when they
begin to learn words. They presume that
the word names something about which there is much more to
learn. Notice that this is quite different from learning
a definition, where all properties of the entity in question
are consequences of the definition.
How shall AI systems be designed so as to be able to learn about
new natural kinds?
∀x.(nat_kind(x) ⊃ ∃y.(property(y,x) ∧ ¬knows(I,property(y,x))))
\centerline{\bf Elaboration tolerance:}
\centerline{\bf Ambiguity tolerance:}
A concept may be used for a long time before ambiguous cases are
found. We humans are puzzled by the ambiguity, but we don't lose
our ability to apply the concept where it is unambiguous and
can accept various resolutions of the ambiguity. The common
sense data base must similarly tolerate ambiguity, because it
is unreasonable to require that the people who express
their common sense knowledge in the formal language of the database
resolve all the ambiguities that philosophers have discovered
and will discover in the future. Otherwise, we couldn't begin
to do AI until philosophy was finished.
What is simplest to believe?
When we take into account a collection of facts, we draw non-monotonic
conclusions that are true in the simplest models of the facts.
What are the actual rules for this?
To what extent can non-monotonically used axioms be about the world
and to what extent must they be about the theory being used?